20 research outputs found

    Social Attention: Modeling Attention in Human Crowds

    Robots that navigate through human crowds need to plan safe, efficient, and human-predictable trajectories. This is particularly challenging because it requires the robot to predict future human trajectories within a crowd where everyone implicitly cooperates to avoid collisions. Previous approaches to human trajectory prediction have modeled interactions between humans as a function of proximity. However, proximity is not a reliable measure of importance: people in our immediate vicinity moving in the same direction may matter less than others farther away who are on a collision course with us. In this work, we propose Social Attention, a novel trajectory prediction model that captures the relative importance of each person when navigating in the crowd, irrespective of proximity. We demonstrate the performance of our method against a state-of-the-art approach on two publicly available crowd datasets and analyze the trained attention model to gain a better understanding of which surrounding agents humans attend to when navigating in a crowd.
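
    The core idea lends itself to a compact illustration: score each surrounding agent with a learned attention weight, independent of distance, and let the weighted neighbors influence the predicted motion. The Python sketch below is a minimal, hypothetical rendering of that idea; the weight matrices, per-agent features, and dimensions are illustrative stand-ins, not the paper's actual architecture.

        # Minimal sketch of attention over surrounding agents for trajectory
        # prediction. Hypothetical: the weights and features are stand-ins.
        import numpy as np

        rng = np.random.default_rng(0)
        D_IN, D_HID = 4, 16                      # per-agent state: (x, y, vx, vy)
        W_query = rng.normal(size=(D_IN, D_HID))
        W_key = rng.normal(size=(D_IN, D_HID))
        W_value = rng.normal(size=(D_IN, 2))     # attended output: a 2-D velocity term

        def attend(ego_state, neighbor_states):
            """Weight each neighbor by learned relevance, not by distance."""
            q = ego_state @ W_query                         # (D_HID,)
            k = neighbor_states @ W_key                     # (N, D_HID)
            scores = k @ q / np.sqrt(D_HID)                 # (N,)
            weights = np.exp(scores - scores.max())
            weights /= weights.sum()                        # softmax over neighbors
            influence = weights @ (neighbor_states @ W_value)
            return weights, influence                       # influence: (2,)

        ego = np.array([0.0, 0.0, 1.0, 0.0])
        neighbors = rng.normal(size=(5, D_IN))
        w, v = attend(ego, neighbors)
        print("attention weights:", np.round(w, 3), "velocity influence:", v)

    Note that a distant agent can receive a large weight if its features score highly against the ego query, which is exactly the property the paper argues proximity-based models lack.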

    Modeling Cooperative Navigation in Dense Human Crowds

    For robots to be part of our daily lives, they need to be able to navigate among crowds not only safely but also in a socially compliant fashion. This is a challenging problem because humans tend to navigate by implicitly cooperating with one another to avoid collisions while heading toward their respective destinations. Previous approaches have used hand-crafted functions based on proximity to model human-human and human-robot interactions. However, these approaches can only model simple interactions and fail to generalize to complex crowded settings. In this paper, we develop an approach that models the joint distribution over future trajectories of all interacting agents in the crowd, through a local interaction model that we train using real human trajectory data. The interaction model infers the velocity of each agent based on the spatial orientation of other agents in its vicinity. During prediction, our approach infers the goal of the agent from its past trajectory and uses the learned model to predict its future trajectory. We demonstrate the performance of our method against a state-of-the-art approach on a public dataset and show that our model outperforms it when predicting future trajectories over longer horizons. Comment: Accepted at ICRA 201
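
    As a rough illustration of the local interaction idea, the hypothetical Python sketch below infers an agent's velocity from an angular occupancy histogram of its neighbors, combined with a preferred velocity toward an inferred goal. The binning scheme, the linear read-out, and all constants are assumptions made for illustration; the paper's interaction model is instead trained on real human trajectory data.

        # Hedged sketch of a local interaction model: infer an agent's next
        # velocity from an angular occupancy histogram of nearby agents.
        import numpy as np

        N_BINS = 8                                   # angular sectors around the agent
        rng = np.random.default_rng(1)
        W = rng.normal(scale=0.1, size=(N_BINS, 2))  # stand-in for learned weights

        def occupancy(agent_pos, neighbor_pos, radius=3.0):
            """Histogram of neighbors by bearing, within a local radius."""
            hist = np.zeros(N_BINS)
            for p in neighbor_pos:
                d = p - agent_pos
                r = np.linalg.norm(d)
                if 0 < r < radius:
                    angle = np.arctan2(d[1], d[0]) % (2 * np.pi)
                    hist[int(angle / (2 * np.pi) * N_BINS)] += 1.0
            return hist

        def predict_velocity(agent_pos, goal, neighbor_pos, pref_speed=1.3):
            """Preferred velocity toward the goal, corrected by local interactions."""
            to_goal = goal - agent_pos
            v_pref = pref_speed * to_goal / (np.linalg.norm(to_goal) + 1e-9)
            return v_pref + occupancy(agent_pos, neighbor_pos) @ W

        v = predict_velocity(np.zeros(2), np.array([5.0, 0.0]),
                             [np.array([1.0, 0.5]), np.array([2.0, -0.3])])
        print("predicted velocity:", v)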

    Model Learning for Look-ahead Exploration in Continuous Control

    We propose an exploration method that incorporates look-ahead search over basic learned skills and their dynamics, and use it for reinforcement learning (RL) of manipulation policies. Our skills are multi-goal policies learned in isolation in simpler environments using existing multi-goal RL formulations, analogous to options or macro-actions. Coarse skill dynamics, i.e., the state transition caused by a (complete) skill execution, are learned and unrolled forward during look-ahead search. Policy search benefits from temporal abstraction during exploration, though the policy itself operates over low-level primitive actions; thus the resulting policies do not suffer from the suboptimality and inflexibility caused by coarse skill chaining. We show that the proposed exploration strategy results in effective learning of complex manipulation policies faster than current state-of-the-art RL methods, and converges to better policies than methods that use options or parameterized skills as building blocks of the policy itself, as opposed to guiding exploration. Comment: This is a pre-print of our paper, which is accepted at AAAI 201
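
    A minimal sketch of the look-ahead idea, under stated assumptions: each skill has a coarse dynamics model mapping a state to its post-execution successor, and exploration runs a depth-limited search over skill sequences to choose a promising first skill, while the policy itself still acts with low-level primitives. The skills, dynamics, and reward below are toy stand-ins rather than learned models.

        # Minimal sketch of look-ahead exploration over coarse skill dynamics.
        import itertools
        import numpy as np

        # Hypothetical coarse dynamics: each skill maps a state to a successor.
        skills = {
            "reach": lambda s: s + np.array([1.0, 0.0]),
            "lift": lambda s: s + np.array([0.0, 1.0]),
            "retract": lambda s: s - np.array([0.5, 0.0]),
        }

        def reward(state, goal):
            return -np.linalg.norm(state - goal)

        def lookahead(state, goal, depth=3):
            """Exhaustive depth-limited search over skill sequences."""
            best_seq, best_val = None, -np.inf
            for seq in itertools.product(skills, repeat=depth):
                s = state
                for name in seq:
                    s = skills[name](s)          # unroll coarse skill dynamics
                val = reward(s, goal)
                if val > best_val:
                    best_seq, best_val = seq, val
            # Exploration then biases low-level actions toward best_seq[0].
            return best_seq

        print(lookahead(np.zeros(2), np.array([2.0, 2.0])))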

    Modeling and Learning of Complex Motor Tasks: A Case Study with Robot Table Tennis

    Most tasks that humans need to accomplish in their everyday life require certain motor skills. Although most motor skills seem to rely on the same elementary movements, humans are able to accomplish many different tasks. Robots, on the other hand, are still limited to a small number of skills and depend on well-defined environments. Modeling new motor behaviors is therefore an important research area in robotics. Computational models of human motor control are an essential step toward robotic systems that can solve complex tasks in a human-inhabited environment. These models can be the key to robust, efficient, and human-like movement plans. In turn, the reproduction of human-like behavior on a robotic system can also help computational neuroscientists verify their hypotheses. Although biomimetic models can be of great help in closing the gap between human and robot motor abilities, they are usually limited to the scenarios considered. One important property of human motor behavior, however, is the ability to adapt skills to new situations and to learn new motor skills with relatively few trials. Domain-appropriate machine learning techniques, such as supervised and reinforcement learning, have great potential to enable robotic systems to autonomously learn motor skills.

    In this thesis, we attempt to model and subsequently learn a complex motor task, choosing robot table tennis as the test case throughout. Table tennis requires a series of time-critical movements which have to be selected and adapted according to environmental stimuli as well as the desired targets. We first analyze how humans play table tennis and create a computational model that results in human-like hitting motions on a robot arm. Our focus lies on generating motor behavior capable of adapting to variations and uncertainties in the environmental conditions. We evaluate the resulting biomimetic model both in a physically realistic simulation and on a real anthropomorphic seven-degree-of-freedom Barrett WAM robot arm. This biomimetic model, based purely on analytical methods, produces successful hitting motions but does not exhibit the flexibility found in human motor behavior.

    We therefore suggest a new framework that allows a robot to learn cooperative table tennis from and with a human. Here, the robot first learns a set of elementary hitting movements from a human teacher by kinesthetic teach-in, which is compiled into a set of motor primitives. To generalize these movements to a wider range of situations, we introduce the mixture of motor primitives algorithm. The resulting motor policy enables the robot to select appropriate motor primitives as well as to generalize between them. Furthermore, it allows the selection of hitting movements to be adapted based on the outcome of previous trials. The framework is evaluated both in simulation and on a real Barrett WAM robot. In consecutive experiments, we show that our approach allows the robot to return balls from a ball launcher and, furthermore, to play table tennis with a human partner.

    Executing robot movements using a biomimetic or learned approach enables the robot to return balls successfully. However, in motor tasks with a competitive goal such as table tennis, the robot not only needs to return balls successfully; it also needs an adaptive strategy. Such a higher-level strategy cannot be programmed manually, as it depends on the opponent and the abilities of the robot. We therefore take a first step toward acquiring such a strategy and investigate the possibility of inferring strategic information from observing humans playing table tennis. We model table tennis as a Markov decision problem, where the reward function captures the goal of the task as well as knowledge about effective elements of a basic strategy. We show how this reward function, and therefore the strategic information, can be discovered with model-free inverse reinforcement learning from human table tennis matches. The approach is evaluated on data collected from players with different playing styles and skill levels. We show that the resulting reward functions capture expert-specific strategic information that allows us to distinguish the expert among players with different playing skills and styles.

    To summarize, in this thesis we derived a computational model for table tennis that was successfully implemented on a Barrett WAM robot arm and shown to produce human-like hitting motions. We also introduced a framework for learning a complex motor task based on a library of demonstrated hitting primitives. To select and generalize these hitting movements, we developed the mixture of motor primitives algorithm, in which the selection process can be adapted online based on the success of the synthesized hitting movements. The setup was tested on a real robot, showing that the resulting robot table tennis player is able to play a cooperative game against a human opponent. Finally, we showed that it is possible to infer basic strategic information in table tennis from observing matches of human players using model-free inverse reinforcement learning.
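
    The mixture of motor primitives step admits a compact illustration: gate a library of demonstrated hitting primitives by how closely the current situation matches the context in which each was demonstrated, weight by a running success estimate that can be adapted after each trial, and blend the primitives' parameters. The Gaussian gating and the success weighting in the Python sketch below are illustrative assumptions, not the thesis's exact formulation.

        # Hedged sketch of a mixture-of-motor-primitives selection step.
        import numpy as np

        rng = np.random.default_rng(2)

        class Primitive:
            def __init__(self, context, params):
                self.context = context   # situation it was demonstrated in (e.g. ball state)
                self.params = params     # hitting-movement parameters
                self.success = 1.0       # running success weight, adapted online

        def mix(primitives, query, bandwidth=0.5):
            """Blend primitive parameters, weighted by context proximity and success."""
            w = np.array([p.success * np.exp(-np.sum((p.context - query) ** 2)
                                             / (2 * bandwidth ** 2))
                          for p in primitives])
            w /= w.sum()
            return sum(wi * p.params for wi, p in zip(w, primitives))

        library = [Primitive(rng.normal(size=3), rng.normal(size=5)) for _ in range(4)]
        theta = mix(library, query=np.zeros(3))
        print("blended movement parameters:", theta)

    After each trial, the success weights can be increased or decreased depending on whether the synthesized hit returned the ball, which adapts the selection process online as the abstract describes.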

    Autonomy Infused Teleoperation with Application to BCI Manipulation

    Robot teleoperation systems face a common set of challenges, including latency, low-dimensional user commands, and asymmetric control inputs. User control with Brain-Computer Interfaces (BCIs) exacerbates these problems through especially noisy and erratic low-dimensional motion commands, owing to the difficulty of decoding neural activity. We introduce a general framework that addresses these challenges through a combination of computer vision, user intent inference, and arbitration between the human input and autonomous control schemes. Adjustable levels of assistance allow the system to balance the operator's capabilities and feelings of comfort and control while compensating for a task's difficulty. We present experimental results demonstrating significant performance improvement using the shared-control assistance framework on adapted rehabilitation benchmarks, with two subjects implanted with intracortical brain-computer interfaces controlling a seven-degree-of-freedom robotic manipulator as a prosthetic. Our results further indicate that shared assistance mitigates perceived user difficulty and even enables successful performance on previously infeasible tasks. We showcase the extensibility of our architecture with applications to quality-of-life tasks such as opening a door, pouring liquids from containers, and manipulating novel objects in densely cluttered environments.
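
    The arbitration step can be sketched compactly: infer the operator's intended goal from a noisy, low-dimensional command, compute an autonomous command toward that goal, and linearly blend the two according to an adjustable assistance level. The intent-inference rule and the linear blend in the sketch below are hypothetical simplifications of the framework described above.

        # Minimal sketch of arbitration between a noisy user command and an
        # autonomous policy, as in shared-control teleoperation.
        import numpy as np

        def autonomy_command(robot_pos, goal):
            """Autonomous controller: head straight for the inferred goal."""
            d = goal - robot_pos
            return d / (np.linalg.norm(d) + 1e-9)

        def arbitrate(user_cmd, robot_pos, goals, assistance=0.7):
            """Infer the intended goal from the user command, then blend."""
            # Intent inference: pick the goal most aligned with the user's input.
            scores = [user_cmd @ autonomy_command(robot_pos, g) for g in goals]
            goal = goals[int(np.argmax(scores))]
            auto_cmd = autonomy_command(robot_pos, goal)
            # Arbitration: higher assistance shifts authority to the autonomy.
            return (1 - assistance) * user_cmd + assistance * auto_cmd

        goals = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
        cmd = arbitrate(np.array([0.9, 0.1]), np.zeros(2), goals)
        print("blended command:", cmd)

    Raising the assistance level smooths out erratic BCI input at the cost of user authority, which is the trade-off the adjustable-assistance design is meant to expose.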
